18 research outputs found
On Minimizing the Makespan When Some Jobs Cannot Be Assigned on the Same Machine
We study the classical scheduling problem of assigning jobs to machines in order to minimize the makespan. It is well-studied and admits an EPTAS on identical machines and a (2-1/m)-approximation algorithm on unrelated machines. In this paper we study a variation in which the input jobs are partitioned into bags and no two jobs from the same bag are allowed to be assigned on the same machine. Such a constraint can easily arise, e.g., due to system stability and redundancy considerations. Unfortunately, as we demonstrate in this paper, the techniques of the above results break down in the presence of these additional constraints.
Our first result is a PTAS for the case of identical machines. It enhances the methods from the known (E)PTASs by a finer classification of the input jobs and careful arguments for why a good schedule exists after enumerating over the large jobs. For unrelated machines, we prove that there can be no (log n)^{1/4-epsilon}-approximation algorithm for the problem for any epsilon > 0, assuming that NP is not contained in ZPTIME(2^{(log n)^{O(1)}}). This holds even in the restricted assignment setting. However, we identify a special case of the latter in which we can do better: if all jobs of the same bag are allowed on the same set of machines, we give an 8-approximation algorithm. It is based on rounding the LP-relaxation of the problem in phases and adjusting the residual fractional solution after each phase in order to respect the bag constraints.
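To make the bag constraint concrete, here is a small illustrative sketch (my own naming, and not the paper's PTAS or LP-rounding algorithm): a longest-processing-time greedy that never places two jobs from the same bag on one machine.

```python
def greedy_bag_schedule(jobs, m):
    """Illustrative greedy (not the paper's algorithm): assign jobs in
    longest-first order to the least-loaded machine that does not yet
    hold a job from the same bag.

    jobs: list of (processing_time, bag_id); m: number of machines.
    Returns (makespan, assignment dict) or None if some bag has more
    jobs than there are machines (then no feasible schedule exists).
    """
    loads = [0.0] * m
    bags_on = [set() for _ in range(m)]  # bag ids present on each machine
    assignment = {}
    for idx, (p, bag) in sorted(enumerate(jobs), key=lambda t: -t[1][0]):
        feasible = [i for i in range(m) if bag not in bags_on[i]]
        if not feasible:
            return None  # a bag has more jobs than machines
        i = min(feasible, key=lambda j: loads[j])
        loads[i] += p
        bags_on[i].add(bag)
        assignment[idx] = i
    return max(loads), assignment
```

This only illustrates feasibility under the constraint; the paper's point is precisely that such simple adaptations of identical-machine heuristics do not yield the (E)PTAS guarantees.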
On property-$(P_1)$ and relative Chebyshev centers in Banach spaces-II
We continue to study (strong) property-$(P_1)$ in Banach spaces. As discussed
by Pai \& Nowroji in [{\it On restricted centers of sets}, J. Approx. Theory,
{\bf 66}(2), 170--189 (1991)], this study corresponds to a triplet
$(X, V, \mathcal{F})$, where $X$ is a Banach space, $V$ is a closed convex set,
and $\mathcal{F}$ is a subfamily of closed, bounded subsets of $X$. It is
observed that if $X$ is a Lindenstrauss space then $(X, V, \mathcal{K}(X))$ has
strong property-$(P_1)$, where $\mathcal{K}(X)$ represents the compact subsets
of $X$. It is established that for any $F \in \mathcal{K}(X)$, the relative
Chebyshev center of $F$ in $V$ is nonempty. This extends the well-known fact
that a compact subset of a Lindenstrauss space admits a nonempty Chebyshev
center in $V$. We extend our observation that the relative Chebyshev-center map
is Lipschitz continuous if $X$ is a Lindenstrauss space. If $Y$ is a subspace
of a Banach space $X$ and $\mathcal{F}(Y)$ represents the set of all finite
subsets of $Y$, then we observe that $Y$ exhibits the condition for
simultaneous strong proximinality (viz. property-$(P_1)$) in $X$ for
$\mathcal{F}(Y)$ if $(X, Y, \mathcal{F}(X))$ satisfies strong
property-$(P_1)$, where $\mathcal{F}(X)$ represents the set of all finite
subsets of $X$. It is demonstrated that if $P$ is a bi-contractive projection
in $X$, then $(X, P(X), \mathcal{K}(X))$ exhibits the strong
property-$(P_1)$, where $\mathcal{K}(X)$ represents the set of all compact
subsets of $X$. Furthermore, stability results for these properties are derived
in continuous function spaces, which are then studied for various sums in
Banach spaces.
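For context, the relative (restricted) Chebyshev radius and center of a bounded set $F$ with respect to the closed convex set $V$ are the standard quantities

\[
\mathrm{rad}_V(F) \;=\; \inf_{v \in V} \, \sup_{f \in F} \|f - v\|,
\qquad
\mathrm{cent}_V(F) \;=\; \bigl\{\, v_0 \in V : \sup_{f \in F} \|f - v_0\| = \mathrm{rad}_V(F) \,\bigr\},
\]

and, roughly, property-$(P_1)$ of a triplet asks that $\mathrm{cent}_V(F)$ be nonempty for every $F$ in the given subfamily.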
A Constant Factor Approximation for Capacitated Min-Max Tree Cover
Given a graph G = (V,E) with non-negative real edge lengths and an integer parameter k, the (uncapacitated) Min-Max Tree Cover problem seeks a set of at most k trees, each a subgraph of G, which together span V. The objective is to minimize the maximum length among all the trees. In this paper, we consider a capacitated generalization of the above and give the first constant factor approximation algorithm. In the capacitated version, there is a hard uniform capacity lambda on the number of vertices a tree can cover. Our result extends to the rooted version of the problem, where we are given a set of k root vertices R, and each of the covering trees is required to include a distinct vertex in R as the root. Prior to our work, the only result known was a (2k-1)-approximation algorithm for the special case when the total number of vertices in the graph is k·lambda [Guttmann-Beck and Hassin, J. of Algorithms, 1997]. Our technique circumvents the difficulty of using the minimum spanning tree of the graph as a lower bound, which is standard for the uncapacitated version of the problem [Even et al., OR Letters 2004; Khani et al., Algorithmica 2010]. Instead, we use Steiner trees that cover lambda vertices along with an iterative refinement procedure that ensures that the output trees have low cost and the vertices are well distributed among the trees.
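A minimal sketch of the capacitated objective (naming mine, not from the paper): given a candidate cover as a list of trees, each an edge list, check the hard vertex capacity and evaluate the min-max cost.

```python
def minmax_tree_cover_cost(trees, capacity):
    """Evaluate a candidate capacitated tree cover.

    trees: list of trees, each given as a list of edges (u, v, length).
    capacity: hard uniform bound on the number of vertices per tree.
    Returns the maximum total length over the trees; raises ValueError
    if any tree covers more than `capacity` vertices.
    """
    best = 0.0
    for edges in trees:
        vertices = {x for (u, v, _) in edges for x in (u, v)}
        if len(vertices) > capacity:
            raise ValueError("tree exceeds the hard vertex capacity")
        best = max(best, sum(w for (_, _, w) in edges))
    return best
```

This only checks a solution; the algorithmic difficulty the abstract describes lies in constructing low-cost trees that respect the capacity.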
Minimizing Weighted lp-Norm of Flow-Time in the Rejection Model
We consider the online scheduling problem of minimizing the weighted ell_p-norm of flow-time of jobs. We study this problem under the rejection model introduced by Choudhury et al. (SODA 2015), in which the online algorithm is allowed to not serve an eps-fraction of the requests. We consider the restricted assignment setting where each job can go to a specified subset of machines. Our main result is an immediate-dispatch, non-migratory 1/eps^{O(1)}-competitive algorithm for this problem when one is allowed to reject at most an eps-fraction of the total weight of jobs arriving. This is in contrast with the speed augmentation model, under which no online algorithm for this problem can achieve a competitive ratio independent of p.
Fair Rank Aggregation
Ranking algorithms find extensive usage in diverse areas such as web search,
employment, college admission, voting, etc. The related rank aggregation
problem deals with combining multiple rankings into a single aggregate ranking.
However, algorithms for both these problems might be biased against some
individuals or groups due to implicit prejudice or marginalization in the
historical data. We study ranking and rank aggregation problems from a fairness
or diversity perspective, where the candidates (to be ranked) may belong to
different groups and each group should have a fair representation in the final
ranking. We allow the designer to set the parameters that define fair
representation. These parameters specify the allowed range of the number of
candidates from a particular group in the top-k positions of the ranking.
Given any ranking, we provide a fast and exact algorithm for finding the
closest fair ranking for the Kendall tau metric under block-fairness. We also
provide an exact algorithm for finding the closest fair ranking for the Ulam
metric under strict-fairness, when there are only a constant number of groups. Our
algorithms are simple, fast, and might be extendable to other relevant metrics.
We also give a novel meta-algorithm for the general rank aggregation problem
under the fairness framework. Surprisingly, this meta-algorithm works for any
generalized mean objective (including center and median problems) and any
fairness criteria. As a byproduct, we obtain 3-approximation algorithms for
both center and median problems, under both Kendall tau and Ulam metrics.
Furthermore, using sophisticated techniques we obtain a
(3 - eps)-approximation algorithm, for a constant eps > 0, for the Ulam metric
under strong fairness. (A preliminary version of this paper appeared in
NeurIPS 2022.)
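A concrete reading of these notions (notation mine): the Kendall tau distance counts discordant candidate pairs, and a fairness check verifies that the number of candidates from each group in the top-k positions lies in its allowed range.

```python
from itertools import combinations

def kendall_tau(r1, r2):
    """Number of candidate pairs ordered differently by the two rankings
    (both rankings are permutations of the same candidate set)."""
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2) if pos2[a] > pos2[b])

def is_fair(ranking, group_of, k, bounds):
    """bounds[g] = (lo, hi): allowed range for the number of group-g
    candidates among the top-k positions of the ranking."""
    top = ranking[:k]
    for g, (lo, hi) in bounds.items():
        cnt = sum(1 for c in top if group_of[c] == g)
        if not (lo <= cnt <= hi):
            return False
    return True
```

Finding the *closest* fair ranking under such constraints is the nontrivial part addressed by the paper; the snippet only fixes the definitions.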
Rejecting Jobs to Minimize Load and Maximum Flow-time
Online algorithms are usually analyzed using the notion of competitive ratio
which compares the solution obtained by the algorithm to that obtained by an
online adversary for the worst possible input sequence. Often this measure
turns out to be too pessimistic, and one popular approach especially for
scheduling problems has been that of "resource augmentation" which was first
proposed by Kalyanasundaram and Pruhs. Although resource augmentation has been
very successful in dealing with a variety of objective functions, there are
problems for which even an arbitrarily large constant speedup cannot lead to a
constant competitive algorithm. In this paper we propose a "rejection model"
which requires no resource augmentation but which permits the online algorithm
to not serve an epsilon-fraction of the requests.
The problems considered in this paper are in the restricted assignment
setting where each job can be assigned only to a subset of machines. For the
load balancing problem where the objective is to minimize the maximum load on
any machine, we give an O(\log^2 1/\eps)-competitive algorithm which rejects at
most an \eps-fraction of the jobs. For the problem of minimizing the maximum
weighted flow-time, we give an O(1/\eps^4)-competitive algorithm which can
reject at most an \eps-fraction of the jobs by weight. We also extend this
result to a more general setting where the weights of a job for measuring its
weighted flow-time and its contribution towards total allowed rejection weight
are different. This is useful, for instance, when we consider the objective of
minimizing the maximum stretch. We obtain an O(1/\eps^6)-competitive
algorithm in this case.
Our algorithms are immediate dispatch, though they may not be immediate
reject. All these problems have very strong lower bounds in the speed
augmentation model.
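To pin down the objective and the rejection budget in this model (an illustrative sketch with my own naming, not the paper's algorithm): given release and completion times with weights, the weighted maximum flow-time is taken over served jobs only, and the rejected weight must be a small fraction of the total.

```python
def weighted_max_flowtime(jobs, rejected):
    """jobs: dict id -> (release, completion, weight); rejected: set of ids.
    Flow-time of a served job is completion - release; the objective is
    the maximum of weight * flow-time over served jobs."""
    served = {j: v for j, v in jobs.items() if j not in rejected}
    return max((w * (c - r) for (r, c, w) in served.values()), default=0.0)

def rejected_weight_fraction(jobs, rejected):
    """Fraction of total job weight that the algorithm declined to serve;
    the model requires this to be at most eps."""
    total = sum(w for (_, _, w) in jobs.values())
    return sum(jobs[j][2] for j in rejected) / total
```

For the maximum-stretch extension mentioned above, the weight used in the flow-time term and the weight counted toward the rejection budget may differ.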